PROBLEMS WITH THE BLOCKS WORLD


	The object of this note is to outline some of the problems
that AI must solve by referring them to the much-studied "blocks
world" of Winograd (197x). We shall start with the most general
blocks problem and simplify it step by step until we reach the level
of problem that has actually been solved.

	Suppose that a group of people are to build a house jointly,
sharing in the investment, the work, and the proceeds. We would like
to program a robot and send it forth to take part in the enterprise.

	#. We must start by providing the robot with motivation.
Suppose that it wishes to spend not more than six months at the job,
to invest not more than $10,000, and, when the house is sold, to
maximize the rate of return on its total investment, counting any
labor it puts in at $20 per hour. (It is a hard-working robot and
values its labor.) It must strike an appropriate bargain with its
human collaborators.

	There is no difficulty in programming the robot to compute
the return on investment given an agreed share, the price of the
house, and its inputs in money and labor. However, we don't know the
rules that would allow it to compute the necessary probabilities
given the information that is available in the real world. If we
could limit the factors to be taken into account, we could probably
concoct a rule that would be no worse than present human performance
and perhaps better. However, our program would require prepared
inputs, and it would have no way of taking into account new
information such as the state of union contract negotiations at a
supplier.

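	The easy half of this claim can be made concrete. The sketch
below shows one way the computation might look; the function and
parameter names are mine, chosen purely for illustration, and the
annualization rule is only one of many one could assume.

    def rate_of_return(share, house_price, money_in, labor_hours,
                       labor_rate=20.0, months=6):
        """Annualized rate of return on the robot's total investment.

        share       -- agreed fraction of the sale proceeds (0 to 1)
        house_price -- price the house sells for, in dollars
        money_in    -- dollars the robot invests directly
        labor_hours -- hours of labor the robot puts in
        labor_rate  -- dollars per hour at which it values its labor
        months      -- duration of the project
        """
        investment = money_in + labor_hours * labor_rate
        proceeds = share * house_price
        # Annualize by scaling the per-project return to twelve months.
        return (proceeds - investment) / investment * (12.0 / months)

	For example, with a quarter share of a $120,000 sale, $10,000
in cash, and 500 hours of labor, the robot's investment is $20,000,
its proceeds are $30,000, and its annualized return is 100%. The hard
part, which the sketch simply takes as given, is estimating
quantities like house_price from real-world information.
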
	The first difficulty that we shall consider is forming a
model of the motivations of the robot's collaborators so that it can
come to an agreement with them.

	(Let me point out that there are two kinds of models one can
form of other people, each of which is appropriate in some
circumstances. The simpler kind of model regards the other person as
an automaton that responds to certain stimuli with certain actions.
This stimulus-response relation may be imperfectly known. The second
kind of model ascribes goals and/or a utility function to the other
being. In that case one can ask what actions it believes will achieve
its goals or what actions on our part will benefit it. Using the
automaton model should not be regarded pejoratively; it is often
appropriate. In my role as a classroom teacher, I prefer to be
regarded as an automaton that will reward good work appropriately and
will answer questions appropriately. I don't especially want the
students speculating about my inner motivations. In other human
relationships, I prefer having my motivations and my welfare
considered.)

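	The two kinds of model can be stated schematically. In the
sketch below, whose representation is mine and purely illustrative,
an automaton model is a possibly imperfect map from stimuli to
actions, while the second kind predicts the action that best serves
the goals or utility function ascribed to the other being.

    from typing import Callable, Dict, Iterable, Optional

    class AutomatonModel:
        """The other agent as a stimulus-response relation.

        The relation may be imperfectly known; a stimulus outside the
        known part of the relation yields no prediction.
        """
        def __init__(self, responses: Dict[str, str]):
            self.responses = responses

        def predict(self, stimulus: str) -> Optional[str]:
            return self.responses.get(stimulus)

    class IntentionalModel:
        """The other agent as a maximizer of an ascribed utility."""
        def __init__(self, utility: Callable[[str], float]):
            self.utility = utility

        def predict(self, available_actions: Iterable[str]) -> str:
            # Ask which action the agent believes will best achieve
            # its goals, i.e. which one maximizes its utility.
            return max(available_actions, key=self.utility)

	Which kind of model to use is itself a judgment the robot
must make; as noted above, the cheaper automaton model is often the
appropriate one.
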
	At present, no one has built into a computer program a
reasonable model of either kind of human behavior. Therefore, let's
give up letting our robot be an equal partner in the building co-op
and make it simply a servant.

	#. In its role as servant, the robot must communicate with
the other workers. We don't know how to program free communication in
natural language, so let's give that up.